Proximity Operator of the Matrix Perspective Function and its Applications

Neural Information Processing Systems

We show that the matrix perspective function, which is jointly convex in the Cartesian product of a standard Euclidean vector space and a conformal space of symmetric matrices, has a proximity operator in an almost closed form. The only implicit part is to solve a semismooth, univariate root-finding problem. We uncover the connection between our problem of study and the matrix nearness problem. Through this connection, we propose a quadratically convergent Newton algorithm for the root-finding problem. Experiments verify that evaluating the proximity operator requires at most 8 Newton steps, taking less than 5 seconds for 2000-by-2000 matrices on a standard laptop. Using this routine as a building block, we demonstrate the usefulness of the studied proximity operator in constrained maximum likelihood estimation of the Gaussian mean and covariance, pseudolikelihood-based graphical model selection, and a matrix variant of the scaled lasso problem.
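The implicit step the abstract mentions is a single univariate root-finding problem solved with a quadratically convergent Newton method. Below is a minimal, hypothetical sketch of a plain univariate Newton loop of that flavor; `phi` and `dphi` are stand-ins (not the paper's actual residual) for a semismooth residual function and an element of its generalized derivative.

```python
# Minimal sketch of a univariate Newton iteration of the kind used to
# resolve the implicit part of the proximity operator. `phi` and `dphi`
# are hypothetical stand-ins for the paper's residual and its derivative.

def newton_root(phi, dphi, t0, tol=1e-12, max_iter=50):
    """Find t with phi(t) = 0 starting from t0."""
    t = t0
    for _ in range(max_iter):
        r = phi(t)
        if abs(r) < tol:               # residual small enough: done
            break
        d = dphi(t)
        if d == 0.0:                   # guard against a vanishing derivative
            raise ZeroDivisionError("derivative vanished; Newton step undefined")
        t -= r / d                     # standard Newton update
    return t

# Toy usage: solve t**3 - 2 = 0 (quadratic convergence near the root).
root = newton_root(lambda t: t**3 - 2.0, lambda t: 3.0 * t**2, t0=1.0)
print(root)  # ~ 1.2599
```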




Appendix A More examples of optimality criteria and fixed points

Neural Information Processing Systems

For a fixed-point iteration $T$: Newton's method for root-finding is $T(x, \theta) = x - \eta\,[\partial_x G(x, \theta)]^{-1} G(x, \theta)$; Newton's method for optimization is obtained by choosing $G(x, \theta) = \nabla_x f(x, \theta)$, assuming $\nabla_x^2 f(x, \theta)$ is positive semi-definite. Proximal block coordinate descent fixed point: clearly, when the step sizes are shared, i.e., … We now show how to use the KKT conditions discussed in Section 2.2 to … With our framework, no derivation is needed. However, since this LMO (linear minimization oracle) is piecewise constant, its Jacobian is null almost everywhere.
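As an illustration of the fixed-point view above, here is a small hypothetical sketch (not code from the paper) that iterates the Newton root-finding map $T(x, \theta) = x - \eta\,[\partial_x G(x, \theta)]^{-1} G(x, \theta)$ on a toy problem:

```python
import numpy as np

# Hypothetical illustration: iterate the Newton fixed-point map
# T(x, theta) = x - eta * [d_x G(x, theta)]^{-1} G(x, theta)
# to a fixed point x* satisfying G(x*, theta) = 0.

def newton_fixed_point_map(G, J, x, theta, eta=1.0):
    """One application of the Newton fixed-point map T(x, theta)."""
    return x - eta * np.linalg.solve(J(x, theta), G(x, theta))

# Toy usage: G(x, theta) = x**2 - theta, whose root is x* = sqrt(theta).
G = lambda x, theta: x**2 - theta
J = lambda x, theta: np.diag(2.0 * x)   # Jacobian of G with respect to x
x = np.array([1.0])
for _ in range(20):
    x = newton_fixed_point_map(G, J, x, np.array([2.0]))
print(x)  # ~ [1.41421356]
```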



Supplement: Proximity Operator of the Matrix Perspective Function and its Applications. Joong-Ho Won, Department of Statistics, Seoul National University, wonj@stats.snu.ac.kr. A: Proofs. A.1: A key lemma

Neural Information Processing Systems

Proofs of both Theorems 2 and 4 are based on the following key lemma, Lemma A.1. To prove this lemma, we begin by recalling the definition of directional derivatives: the directional derivative of $F$ at $x$ in the direction $h$ is $F'(x; h) = \lim_{t \downarrow 0} \frac{F(x + th) - F(x)}{t}$, if the limit exists. Now we can prove the lemma: Proof of Lemma A.1. The following lemma shows a representation of an element of this set in terms of $M$: Lemma A.3.
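For intuition, the one-sided limit in this definition can be checked numerically. The sketch below is hypothetical (not from the supplement) and approximates $F'(x; h)$ by a finite difference for a toy $F$:

```python
import numpy as np

# Hypothetical numeric check of the directional-derivative definition:
# F'(x; h) = lim_{t -> 0+} (F(x + t*h) - F(x)) / t, approximated with small t > 0.

def directional_derivative(F, x, h, t=1e-7):
    """One-sided finite-difference approximation of F'(x; h)."""
    return (F(x + t * h) - F(x)) / t

# Toy usage: F(x) = ||x||^2 has exact directional derivative 2 <x, h>.
F = lambda x: float(np.dot(x, x))
x, h = np.array([1.0, -2.0]), np.array([0.5, 1.0])
print(directional_derivative(F, x, h))  # ~ 2 * (0.5 - 2.0) = -3.0
```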



From the Gradient-Step Denoiser to the Proximal Denoiser and their associated convergent Plug-and-Play algorithms

Herfeld, Vincent, de Senneville, Baudouin Denis, Leclaire, Arthur, Papadakis, Nicolas

arXiv.org Artificial Intelligence

In this paper we analyze the Gradient-Step Denoiser and its use in Plug-and-Play algorithms. The Plug-and-Play paradigm of optimization algorithms uses off-the-shelf denoisers to replace the proximity operator or the gradient descent operator of an image prior. Usually this image prior is implicit and cannot be expressed explicitly, but the Gradient-Step Denoiser is trained to be exactly the gradient descent operator or the proximity operator of an explicit functional while preserving state-of-the-art denoising capabilities.
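As a rough illustration of the idea (a sketch under assumptions, not the authors' implementation), a gradient-step denoiser has the form $D(x) = x - \nabla g(x)$ for an explicit smooth potential $g$; in the paper $g$ is parameterized by a trained network, while below a toy quadratic potential stands in for it:

```python
import torch

# Sketch of a gradient-step denoiser D(x) = x - grad g(x) for a smooth scalar
# potential g. A toy quadratic g(x) = 0.5 * ||x - c||^2 stands in for the
# learned prior, so here D(x) = c exactly.

c = torch.tensor([0.2, -0.4, 0.9])

def g(x):
    """Hypothetical smooth potential (stand-in for the trained network)."""
    return 0.5 * torch.sum((x - c) ** 2)

def gradient_step_denoiser(x):
    """Apply D(x) = x - grad g(x) using autograd."""
    x = x.detach().requires_grad_(True)
    (grad,) = torch.autograd.grad(g(x), x)
    return (x - grad).detach()

noisy = torch.tensor([1.0, 0.0, 0.5])
print(gradient_step_denoiser(noisy))  # equals c for this toy potential
```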